Brain Tumor Detection


By Mohamed Jamyl

http://linkedin.com/in/mohamed-jamyl

https://www.kaggle.com/mohamedjamyl

https://github.com/Mohamed-Jamyl


Project Overview

Early detection and classification of brain tumors is an important research area in medical imaging: identifying the tumor type early helps clinicians choose the most suitable treatment and can save patients' lives. The dataset used here contains four classes:


1- Glioma

  • Definition: A common type of brain tumor that starts in the glial cells, which support nerve cells in the brain and spinal cord.

  • Characteristics:

    • Can be benign or malignant.
    • Grows within brain tissue.
    • Symptoms include headaches, seizures, and difficulty with thinking or movement.

2- Meningioma

  • Definition: Originates from the meninges, the membranes that cover the brain and spinal cord.

  • Characteristics:

    • Often benign (non-cancerous).
    • May press on the brain or nerves as it grows.
    • Symptoms depend on its location and may include headaches, vision changes, or behavioral changes.

3- Notumor

  • Definition: No tumor is detected in the image.

  • Purpose: This category is used as a reference or control, helping the model learn to distinguish between healthy and tumorous images.

4- Pituitary

  • Definition: A tumor growing in the pituitary gland, a small gland at the base of the brain that controls hormones.

  • Characteristics:

    • Can affect hormone production.
    • Symptoms may include vision problems, headaches, or hormonal imbalances (e.g., weight gain, menstrual issues).
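These four class names double as integer labels later in the notebook, in the order of the categories list (the same order in which the training folders are read). A minimal sketch of the mapping:

```python
categories = ['glioma', 'meningioma', 'notumor', 'pituitary']

# Integer label assigned to each class; the notebook later derives the
# same mapping via enumerate() over the class folders
label_map = {name: idx for idx, name in enumerate(categories)}
print(label_map)  # {'glioma': 0, 'meningioma': 1, 'notumor': 2, 'pituitary': 3}
```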


Import Libraries

In [ ]:
import os
import numpy as np
import matplotlib.pyplot as plt
import cv2
import seaborn as sns
from sklearn.model_selection import train_test_split

import tensorflow as tf
import keras

from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay

from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image

from tqdm import tqdm
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')


In [3]:
train_data = '/kaggle/input/brain-tumor-mri-dataset/Training'
categories = ['glioma','meningioma','notumor','pituitary']
In [4]:
folds = [os.path.join(train_data, catg) for catg in categories]
folds
Out[4]:
['/kaggle/input/brain-tumor-mri-dataset/Training/glioma',
 '/kaggle/input/brain-tumor-mri-dataset/Training/meningioma',
 '/kaggle/input/brain-tumor-mri-dataset/Training/notumor',
 '/kaggle/input/brain-tumor-mri-dataset/Training/pituitary']
In [5]:
for fold in folds:
    print(os.path.basename(fold), ':', len(os.listdir(fold)))
glioma : 1321
meningioma : 1339
notumor : 1595
pituitary : 1457
In [6]:
# Class counts taken from the output above
labels = categories
values = [1321, 1339, 1595, 1457]

plt.figure(figsize=(12, 5))
plt.bar(labels, values, color='blue')
plt.title('Number of Images per Tumor Class')
plt.xlabel('Tumor Type')
plt.ylabel('Number of Images')
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.tight_layout()
plt.show()

Image array and dimensions

In [7]:
def img_array_dim(folder):
    # Inspect only the first image in the folder
    for img in os.listdir(folder)[:1]:
        img_array = cv2.imread(os.path.join(folder, img))
        print(f'Dim : {img_array.shape}')
        print('-------------------')
        print(img_array)
In [8]:
# glioma
img_array_dim(folds[0])
Dim : (512, 512, 3)
-------------------
[[[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 ...

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]]
In [9]:
# meningioma
img_array_dim(folds[1])
Dim : (512, 512, 3)
-------------------
[[[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 ...

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]]
In [10]:
# notumor
img_array_dim(folds[2])
Dim : (225, 225, 3)
-------------------
[[[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 ...

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]]
In [11]:
# pituitary
img_array_dim(folds[3])
Dim : (512, 512, 3)
-------------------
[[[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 ...

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]
  ...
  [0 0 0]
  [0 0 0]
  [0 0 0]]]

Show images

In [12]:
def show_img(folder):
    # Load only the first 8 images; convert OpenCV's BGR order to RGB for matplotlib
    files = os.listdir(folder)[:8]
    plt.figure(figsize=(14, 6))
    for i, img_name in enumerate(files):
        img = cv2.cvtColor(cv2.imread(os.path.join(folder, img_name)), cv2.COLOR_BGR2RGB)
        plt.subplot(2, 4, i + 1)
        plt.imshow(img)
        plt.axis('off')
        plt.title(f'Image {i+1}')
In [13]:
# glioma 
show_img(folds[0])
In [14]:
# meningioma
show_img(folds[1])
In [15]:
# notumor
show_img(folds[2])
In [16]:
# pituitary
show_img(folds[3])

Show images with an IR-style (JET) colormap

In [17]:
def show_img_with_IR(folder):
    # Apply a JET colormap to the grayscale image to highlight intensity differences
    files = os.listdir(folder)[:8]
    plt.figure(figsize=(14, 6))
    for i, img_name in enumerate(files):
        img = cv2.imread(os.path.join(folder, img_name))
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img_colored = cv2.applyColorMap(img_gray, cv2.COLORMAP_JET)
        plt.subplot(2, 4, i + 1)
        plt.imshow(cv2.cvtColor(img_colored, cv2.COLOR_BGR2RGB))
        plt.axis('off')
        plt.title(f'Image {i+1}')
In [18]:
# glioma
show_img_with_IR(folds[0])
In [19]:
# meningioma
show_img_with_IR(folds[1])
In [20]:
# notumor
show_img_with_IR(folds[2])
In [21]:
# pituitary
show_img_with_IR(folds[3])

Detecting tumor

In [22]:
def detect_tumor(folder):
    # Naive highlight: threshold bright regions and circle large contours.
    # This is a simple intensity heuristic, not a learned detector.
    files = os.listdir(folder)[:2]
    plt.figure(figsize=(14, 6))
    for i, img_name in enumerate(files):
        img = cv2.imread(os.path.join(folder, img_name))
        img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, thresh = cv2.threshold(img_gray, 120, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        img_colored = cv2.applyColorMap(img_gray, cv2.COLORMAP_JET)
        for contour in contours:
            if cv2.contourArea(contour) > 100:
                (x, y), radius = cv2.minEnclosingCircle(contour)
                cv2.circle(img_colored, (int(x), int(y)), int(radius), (0, 0, 255), 2)

        plt.subplot(1, 2, i + 1)
        plt.imshow(cv2.cvtColor(img_colored, cv2.COLOR_BGR2RGB))
        plt.axis('off')
        plt.title(f'Image {i+1}')

    plt.tight_layout()
    plt.show()
In [23]:
# glioma
detect_tumor(folds[0])
In [24]:
# meningioma
detect_tumor(folds[1])
In [25]:
# notumor
detect_tumor(folds[2])
In [26]:
# pituitary
detect_tumor(folds[3])

Checking sizes of images

In [27]:
def checking_size(folder):
    all_sizes = []
    for img in os.listdir(folder):
        img_array = cv2.imread(os.path.join(folder, img))
        all_sizes.append(img_array.shape)
    print(set(all_sizes))
In [28]:
checking_size(folds[0])
{(512, 512, 3)}
In [29]:
checking_size(folds[1])
{(395, 367, 3), (1019, 1149, 3), (219, 224, 3), (237, 212, 3), (251, 205, 3), (245, 206, 3), (442, 442, 3), (395, 369, 3), (314, 329, 3), (212, 238, 3), (527, 552, 3), (354, 318, 3), (284, 324, 3), (218, 180, 3), (234, 216, 3), (264, 420, 3), (235, 214, 3), (216, 234, 3), (238, 212, 3), (546, 472, 3), (337, 305, 3), (439, 645, 3), (223, 200, 3), (395, 416, 3), (236, 200, 3), (223, 226, 3), (358, 314, 3), (342, 323, 3), (412, 300, 3), (224, 239, 3), (526, 530, 3), (340, 507, 3), (236, 213, 3), (624, 491, 3), (270, 250, 3), (554, 554, 3), (377, 341, 3), (398, 351, 3), (372, 341, 3), (252, 200, 3), (345, 338, 3), (500, 455, 3), (993, 825, 3), (354, 289, 3), (225, 209, 3), (369, 503, 3), (512, 512, 3), (410, 304, 3), (306, 278, 3), (214, 226, 3), (345, 300, 3), (339, 290, 3), (237, 213, 3), (396, 402, 3), (228, 221, 3), (394, 295, 3), (355, 330, 3), (235, 200, 3), (674, 534, 3), (427, 441, 3), (240, 210, 3), (398, 369, 3), (306, 306, 3), (249, 206, 3), (320, 497, 3), (320, 257, 3), (234, 219, 3), (356, 286, 3), (362, 507, 3), (365, 306, 3), (248, 257, 3), (342, 290, 3), (315, 315, 3), (307, 257, 3), (216, 224, 3), (359, 300, 3), (423, 630, 3), (354, 298, 3), (258, 300, 3), (393, 313, 3), (395, 366, 3), (239, 211, 3), (480, 480, 3), (1427, 1275, 3), (303, 252, 3), (325, 254, 3), (256, 256, 3), (308, 262, 3), (510, 485, 3), (336, 264, 3), (241, 230, 3), (251, 201, 3), (216, 214, 3), (234, 215, 3), (249, 204, 3), (243, 208, 3), (446, 450, 3), (522, 513, 3), (216, 216, 3), (367, 343, 3), (261, 232, 3), (605, 507, 3), (401, 312, 3), (220, 215, 3), (225, 225, 3), (326, 273, 3), (241, 209, 3), (340, 291, 3), (248, 239, 3), (232, 217, 3), (650, 591, 3), (581, 528, 3), (212, 226, 3), (207, 201, 3), (331, 272, 3), (223, 229, 3), (370, 374, 3), (396, 411, 3), (690, 722, 3), (223, 202, 3), (456, 374, 3), (395, 341, 3), (341, 315, 3)}
In [30]:
checking_size(folds[2])
{(488, 504, 3), (263, 236, 3), (290, 236, 3), (717, 717, 3), (444, 468, 3), (243, 236, 3), (302, 216, 3), (674, 648, 3), (210, 233, 3), (530, 380, 3), (1446, 1375, 3), (512, 434, 3), (250, 201, 3), (274, 230, 3), (251, 447, 3), (252, 236, 3), (442, 442, 3), (249, 201, 3), (320, 296, 3), (228, 233, 3), (218, 233, 3), (725, 728, 3), (483, 430, 3), (248, 208, 3), (480, 853, 3), (600, 600, 3), (484, 405, 3), (417, 428, 3), (506, 444, 3), (223, 236, 3), (630, 630, 3), (614, 630, 3), (344, 320, 3), (228, 235, 3), (273, 236, 3), (236, 236, 3), (243, 200, 3), (504, 450, 3), (244, 206, 3), (470, 469, 3), (273, 251, 3), (214, 229, 3), (496, 453, 3), (474, 356, 3), (264, 235, 3), (248, 203, 3), (1024, 1024, 3), (295, 236, 3), (277, 235, 3), (217, 232, 3), (600, 494, 3), (247, 204, 3), (216, 232, 3), (238, 212, 3), (262, 224, 3), (280, 229, 3), (216, 236, 3), (244, 262, 3), (201, 210, 3), (236, 211, 3), (236, 255, 3), (490, 410, 3), (201, 236, 3), (442, 332, 3), (218, 225, 3), (777, 622, 3), (469, 387, 3), (257, 235, 3), (264, 210, 3), (1080, 1920, 3), (225, 234, 3), (222, 233, 3), (315, 236, 3), (486, 421, 3), (260, 314, 3), (512, 416, 3), (258, 314, 3), (583, 1000, 3), (830, 1024, 3), (848, 785, 3), (693, 800, 3), (442, 400, 3), (418, 364, 3), (202, 216, 3), (415, 339, 3), (259, 225, 3), (301, 275, 3), (243, 207, 3), (225, 207, 3), (350, 350, 3), (340, 339, 3), (280, 212, 3), (300, 227, 3), (252, 200, 3), (280, 278, 3), (212, 220, 3), (1075, 890, 3), (592, 562, 3), (262, 227, 3), (282, 230, 3), (257, 236, 3), (649, 926, 3), (512, 512, 3), (222, 212, 3), (496, 411, 3), (239, 253, 3), (278, 208, 3), (357, 236, 3), (450, 600, 3), (234, 218, 3), (361, 642, 3), (250, 202, 3), (605, 600, 3), (261, 235, 3), (575, 626, 3), (269, 236, 3), (508, 470, 3), (228, 236, 3), (512, 439, 3), (294, 236, 3), (872, 850, 3), (251, 236, 3), (223, 224, 3), (248, 200, 3), (213, 210, 3), (220, 330, 3), (231, 218, 3), (233, 235, 3), (218, 234, 3), (222, 227, 3), (452, 355, 3), (168, 300, 3), (513, 
502, 3), (243, 203, 3), (226, 233, 3), (309, 236, 3), (442, 409, 3), (613, 605, 3), (213, 227, 3), (248, 217, 3), (745, 850, 3), (214, 205, 3), (239, 236, 3), (234, 210, 3), (262, 236, 3), (244, 235, 3), (221, 236, 3), (714, 1000, 3), (502, 438, 3), (235, 230, 3), (250, 236, 3), (454, 442, 3), (216, 235, 3), (257, 221, 3), (200, 200, 3), (197, 177, 3), (229, 235, 3), (194, 259, 3), (275, 220, 3), (280, 236, 3), (236, 214, 3), (442, 441, 3), (243, 233, 3), (217, 208, 3), (201, 173, 3), (242, 208, 3), (228, 228, 3), (252, 222, 3), (214, 235, 3), (366, 236, 3), (512, 446, 3), (232, 236, 3), (480, 852, 3), (220, 236, 3), (227, 235, 3), (198, 150, 3), (332, 590, 3), (256, 256, 3), (215, 235, 3), (257, 196, 3), (244, 201, 3), (600, 652, 3), (251, 201, 3), (328, 267, 3), (234, 215, 3), (286, 224, 3), (221, 228, 3), (509, 452, 3), (208, 233, 3), (218, 236, 3), (602, 655, 3), (253, 278, 3), (501, 411, 3), (268, 236, 3), (686, 626, 3), (304, 235, 3), (213, 236, 3), (203, 236, 3), (248, 237, 3), (293, 216, 3), (500, 500, 3), (211, 219, 3), (496, 414, 3), (210, 200, 3), (824, 755, 3), (519, 600, 3), (300, 236, 3), (351, 321, 3), (249, 205, 3), (225, 225, 3), (540, 504, 3), (234, 234, 3), (224, 234, 3), (485, 407, 3), (528, 528, 3), (585, 629, 3), (781, 733, 3), (750, 750, 3), (259, 194, 3), (236, 212, 3), (310, 329, 3), (449, 359, 3), (406, 331, 3), (501, 456, 3), (832, 825, 3), (270, 236, 3), (680, 680, 3), (260, 236, 3), (213, 226, 3), (198, 254, 3), (664, 550, 3), (226, 236, 3), (234, 209, 3), (183, 275, 3), (393, 350, 3), (851, 724, 3), (400, 393, 3), (326, 276, 3), (489, 416, 3), (222, 236, 3), (225, 208, 3), (231, 236, 3), (537, 472, 3), (398, 497, 3), (280, 420, 3), (192, 192, 3), (274, 244, 3), (443, 354, 3), (207, 207, 3), (242, 209, 3), (380, 336, 3), (424, 417, 3), (720, 1280, 3), (240, 236, 3)}
In [31]:
checking_size(folds[3])
{(1280, 1280, 3), (903, 721, 3), (502, 502, 3), (400, 400, 3), (474, 474, 3), (512, 512, 3), (432, 470, 3), (681, 685, 3), (900, 940, 3), (378, 360, 3), (442, 442, 3), (210, 201, 3), (202, 202, 3), (741, 900, 3), (1365, 1365, 3), (256, 256, 3)}
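The meningioma and notumor folders contain many different resolutions, so it can help to summarize the shapes rather than print the raw set. A small sketch using collections.Counter on hypothetical shape tuples (in the notebook, the real values would come from cv2.imread(...).shape as in checking_size):

```python
from collections import Counter

# Hypothetical shape tuples standing in for the per-image shapes printed above
shapes = [(512, 512, 3)] * 5 + [(225, 225, 3)] * 3 + [(442, 442, 3)] * 2

counts = Counter(shapes)
print(len(counts), 'distinct sizes')
for shape, n in counts.most_common(2):
    print(shape, n)
```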

Resizing images

In [32]:
width, height = 200, 200
def resizing_img(folder):
    # Resize the first 8 images to (width, height) and display them in RGB
    files = os.listdir(folder)[:8]
    plt.figure(figsize=(14, 6))
    for i, img_name in enumerate(files):
        img = cv2.resize(cv2.imread(os.path.join(folder, img_name)), (width, height))
        plt.subplot(2, 4, i + 1)
        plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        plt.axis('off')
        plt.title(f'Image {i+1}')
In [33]:
resizing_img(folds[0])
In [34]:
resizing_img(folds[1])
In [35]:
resizing_img(folds[2])
In [36]:
resizing_img(folds[3])

Creating x and y

In [37]:
x = []
y = []
for label, fold in enumerate(folds):
    for img_name in tqdm(os.listdir(fold)):
        img_path = os.path.join(fold, img_name)
        img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (width, height))
        x.append(img)
        y.append(label)
100%|██████████| 1321/1321 [00:11<00:00, 118.92it/s]
100%|██████████| 1339/1339 [00:10<00:00, 131.47it/s]
100%|██████████| 1595/1595 [00:11<00:00, 141.69it/s]
100%|██████████| 1457/1457 [00:09<00:00, 153.13it/s]
In [38]:
len(x)
Out[38]:
5712
In [39]:
len(y)
Out[39]:
5712
In [40]:
x[0].shape
Out[40]:
(200, 200)
In [41]:
x = np.array(x).reshape(-1, width, height, 1)
y = np.array(y)
In [42]:
x[0].shape
Out[42]:
(200, 200, 1)
In [43]:
x[:1]
Out[43]:
array([[[[0],
         [0],
         [0],
         ...,
         [0],
         [0],
         [0]],

        [[1],
         [1],
         [1],
         ...,
         [0],
         [0],
         [0]],

        [[2],
         [2],
         [2],
         ...,
         [0],
         [0],
         [0]],

        ...,

        [[2],
         [2],
         [2],
         ...,
         [0],
         [0],
         [0]],

        [[1],
         [1],
         [1],
         ...,
         [0],
         [0],
         [0]],

        [[0],
         [0],
         [0],
         ...,
         [0],
         [0],
         [0]]]], dtype=uint8)
In [44]:
y
Out[44]:
array([0, 0, 0, ..., 3, 3, 3])
In [45]:
y[:1]
Out[45]:
array([0])

Splitting Data


In [46]:
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.8, random_state=1234)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
(4569, 200, 200, 1)
(1143, 200, 200, 1)
(4569,)
(1143,)
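The split sizes follow directly from train_size=0.8 on the 5712 collected images; a quick arithmetic check (train_test_split effectively floors the training share):

```python
total = 5712                  # images collected across the four classes
train_size = int(total * 0.8)
test_size = total - train_size
print(train_size, test_size)  # 4569 1143
```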

Building Convolutional Neural Network


In [54]:
model = keras.models.Sequential([
        keras.layers.Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), padding='valid', input_shape=(height, width, 1)),
        keras.layers.BatchNormalization(),
        keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1)),

        keras.layers.Conv2D(filters=32, kernel_size=(4, 4), strides=(1, 1), padding='valid'),
        keras.layers.BatchNormalization(),
        keras.layers.Activation('relu'),
        keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),

        keras.layers.Flatten(),
        keras.layers.Dense(32, activation='relu', kernel_regularizer=keras.regularizers.l2(0.01)),
        keras.layers.Dense(64, activation='relu'),
        # keras.layers.Dropout(0.2),
        keras.layers.Dense(4, activation='softmax'),
        ])
In [55]:
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
In [56]:
model.summary()
Model: "sequential_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ conv2d_2 (Conv2D)               │ (None, 198, 198, 32)   │           320 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ batch_normalization_2           │ (None, 198, 198, 32)   │           128 │
│ (BatchNormalization)            │                        │               │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_2 (MaxPooling2D)  │ (None, 197, 197, 32)   │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_3 (Conv2D)               │ (None, 194, 194, 32)   │        16,416 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ batch_normalization_3           │ (None, 194, 194, 32)   │           128 │
│ (BatchNormalization)            │                        │               │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ activation_1 (Activation)       │ (None, 194, 194, 32)   │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_3 (MaxPooling2D)  │ (None, 97, 97, 32)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_1 (Flatten)             │ (None, 301088)         │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_3 (Dense)                 │ (None, 32)             │     9,634,848 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_4 (Dense)                 │ (None, 64)             │         2,112 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_5 (Dense)                 │ (None, 4)              │           260 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 9,654,212 (36.83 MB)
 Trainable params: 9,654,084 (36.83 MB)
 Non-trainable params: 128 (512.00 B)
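As a sanity check, the per-layer parameter counts in the summary can be reproduced by hand:

```python
# Conv2D: (kh * kw * in_channels + 1) * filters; Dense: (inputs + 1) * units;
# BatchNormalization: 4 parameters per channel (2 trainable, 2 non-trainable)
conv1  = (3 * 3 * 1 + 1) * 32        # 320
bn     = 4 * 32                      # 128 per BatchNormalization layer
conv2  = (4 * 4 * 32 + 1) * 32       # 16,416
dense1 = (97 * 97 * 32 + 1) * 32     # 9,634,848 (Flatten of (97, 97, 32) into Dense(32))
dense2 = (32 + 1) * 64               # 2,112
dense3 = (64 + 1) * 4                # 260

total = conv1 + 2 * bn + conv2 + dense1 + dense2 + dense3
print(total)  # 9654212, matching model.summary()
```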

Training Model


In [57]:
early_stopping = keras.callbacks.EarlyStopping(
    monitor='val_accuracy',
    patience=5,
    restore_best_weights=True)

history = model.fit(
    x_train, y_train,
    batch_size=60,
    steps_per_epoch=60,
    epochs=50,
    validation_split=0.1,
    verbose=1,
    callbacks=[early_stopping])
Epoch 1/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 453s 7s/step - accuracy: 0.5073 - loss: 8.8183 - val_accuracy: 0.3457 - val_loss: 12.2278
Epoch 2/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 72s 1s/step - accuracy: 0.7231 - loss: 2.5939 - val_accuracy: 0.4398 - val_loss: 12.5582
Epoch 3/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 476s 8s/step - accuracy: 0.8411 - loss: 1.4689 - val_accuracy: 0.7681 - val_loss: 1.3225
Epoch 4/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 76s 1s/step - accuracy: 0.8231 - loss: 1.0884 - val_accuracy: 0.8162 - val_loss: 1.3318
Epoch 5/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 474s 8s/step - accuracy: 0.8957 - loss: 0.8406 - val_accuracy: 0.8009 - val_loss: 1.1537
Epoch 6/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 73s 1s/step - accuracy: 0.8841 - loss: 0.7305 - val_accuracy: 0.8928 - val_loss: 0.7779
Epoch 7/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 479s 8s/step - accuracy: 0.8927 - loss: 0.6666 - val_accuracy: 0.8490 - val_loss: 1.0729
Epoch 8/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 73s 1s/step - accuracy: 0.9011 - loss: 0.8951 - val_accuracy: 0.8512 - val_loss: 1.1015
Epoch 9/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 482s 8s/step - accuracy: 0.9029 - loss: 0.7449 - val_accuracy: 0.7768 - val_loss: 1.1321
Epoch 10/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 73s 1s/step - accuracy: 0.8964 - loss: 0.6544 - val_accuracy: 0.8359 - val_loss: 0.9086
Epoch 11/50
60/60 ━━━━━━━━━━━━━━━━━━━━ 481s 8s/step - accuracy: 0.9139 - loss: 0.6326 - val_accuracy: 0.8687 - val_loss: 0.7925

Model Evaluation


In [58]:
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=1)

print(f"Test Loss: {test_loss:.4f}")
print(f"Test Accuracy: {test_accuracy:.4f}")
36/36 ━━━━━━━━━━━━━━━━━━━━ 25s 685ms/step - accuracy: 0.8772 - loss: 0.7764
Test Loss: 0.7529
Test Accuracy: 0.8758
In [62]:
# Saving the trained model
#model.save('Brain_Tumor_Detection_model.h5')
#print("Model saved successfully!")
Model saved successfully!
In [59]:
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train Accuracy', marker='o')
plt.plot(history.history['val_accuracy'], label='Val Accuracy', marker='x')
plt.title('Accuracy over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid(True)

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train Loss', marker='o')
plt.plot(history.history['val_loss'], label='Val Loss', marker='x')
plt.title('Loss over Epochs')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)

plt.tight_layout()
plt.show()

Based on these plots, the model is training effectively: training accuracy rises steadily while training loss falls. The validation metrics follow the same overall trend but fluctuate from epoch to epoch, with validation accuracy peaking around epoch 6 (≈0.89) before dipping again. Because EarlyStopping monitors val_accuracy with restore_best_weights=True, the weights from that best epoch are restored, so the later fluctuations do not affect the final model. The gap between training and validation accuracy suggests only mild overfitting rather than a serious generalization problem.


In [60]:
y_pred_probs = model.predict(x_test) 
y_pred = np.argmax(y_pred_probs, axis=1)  

y_true = y_test  
36/36 ━━━━━━━━━━━━━━━━━━━━ 25s 676ms/step
In [61]:
cm = confusion_matrix(y_true, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=categories)
fig, ax = plt.subplots(figsize=(12, 6))
disp.plot(ax=ax, xticks_rotation='vertical', cmap='Blues')
plt.show()
  • The model performs very well in identifying "notumor" cases (293 correct, very few misclassifications to other classes from "notumor").
  • It also performs strongly for "pituitary" cases (281 correct).
  • For "glioma," the model correctly identifies a good number (228), but there are a notable number of misclassifications, especially confusing "glioma" with "meningioma" (35 cases).
  • Similarly, "meningioma" has a high number of correct predictions (199), but it is often confused with "glioma" (41 cases) and also has some confusion with "notumor" and "pituitary."
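The per-class behaviour described above can be quantified as recall directly from the confusion matrix. A minimal sketch with an illustrative matrix whose diagonal matches the counts quoted above (the off-diagonal fillers are invented for the example; in the notebook the real cm computed from y_true and y_pred would be used):

```python
import numpy as np

# Illustrative confusion matrix (rows = true class, cols = predicted class);
# diagonal values follow the commentary above, off-diagonal entries are made up
cm = np.array([
    [228,  35,   2,   5],   # glioma
    [ 41, 199,  15,  14],   # meningioma
    [  1,   4, 293,   6],   # notumor
    [  2,   9,   8, 281],   # pituitary
])
labels = ['glioma', 'meningioma', 'notumor', 'pituitary']

# Per-class recall: correct predictions divided by the true-class row total
recall = np.diag(cm) / cm.sum(axis=1)
for name, r in zip(labels, recall):
    print(f'{name}: recall = {r:.2f}')
```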

In [63]:
# Convert to Tensor
x_train = tf.convert_to_tensor(x_train, dtype=tf.float32)
x_test = tf.convert_to_tensor(x_test, dtype=tf.float32)
In [64]:
# Ensure shape is (N, H, W, 1)
if x_train.shape[-1] != 1:
    x_train = tf.expand_dims(x_train, axis=-1)
    x_test = tf.expand_dims(x_test, axis=-1)
In [ ]:
# Convert grayscale to RGB (1 to 3 channels); x_train/x_test are already float32 tensors
x_train = tf.image.grayscale_to_rgb(x_train)
x_test = tf.image.grayscale_to_rgb(x_test)
In [66]:
# Resize all to 200x200 (for ResNet50)
width, height = 200,200
x_train = tf.image.resize(x_train, [width, height])
x_test = tf.image.resize(x_test, [width, height])
In [68]:
# Preprocess images for ResNet50
x_train_rgb = preprocess_input(x_train)
x_test_rgb = preprocess_input(x_test)
In [76]:
# One-hot encode the labels
if len(y_train.shape) == 1 or y_train.shape[-1] == 1:
    num_classes = len(np.unique(y_train))
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)
In [77]:
# Create Dataset
batch_size = 32

train_dataset = tf.data.Dataset.from_tensor_slices((x_train_rgb, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1000).batch(batch_size)

test_dataset = tf.data.Dataset.from_tensor_slices((x_test_rgb, y_test))
test_dataset = test_dataset.batch(batch_size)
In [78]:
# Load Pretrained ResNet50 (without top)
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(width, height, 3))
base_model.trainable = False  # freeze base
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94765736/94765736 ━━━━━━━━━━━━━━━━━━━━ 3s 0us/step
In [79]:
# Add custom head
x = base_model.output
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(256, activation='relu')(x)
x = keras.layers.Dropout(0.3)(x)
output = keras.layers.Dense(num_classes, activation='softmax')(x)

model = keras.models.Model(inputs=base_model.input, outputs=output)
In [80]:
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
In [81]:
model.fit(train_dataset, epochs=10, validation_data=test_dataset)
Epoch 1/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 383s 3s/step - accuracy: 0.5723 - loss: 1.0514 - val_accuracy: 0.8871 - val_loss: 0.3447
Epoch 2/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 394s 3s/step - accuracy: 0.8641 - loss: 0.3776 - val_accuracy: 0.9186 - val_loss: 0.2787
Epoch 3/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 376s 3s/step - accuracy: 0.8967 - loss: 0.2921 - val_accuracy: 0.9186 - val_loss: 0.2468
Epoch 4/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 375s 3s/step - accuracy: 0.9175 - loss: 0.2311 - val_accuracy: 0.9300 - val_loss: 0.2196
Epoch 5/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 382s 3s/step - accuracy: 0.9345 - loss: 0.1950 - val_accuracy: 0.9353 - val_loss: 0.2060
Epoch 6/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 363s 3s/step - accuracy: 0.9444 - loss: 0.1742 - val_accuracy: 0.9274 - val_loss: 0.2130
Epoch 7/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 392s 3s/step - accuracy: 0.9445 - loss: 0.1588 - val_accuracy: 0.9370 - val_loss: 0.1977
Epoch 8/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 362s 3s/step - accuracy: 0.9541 - loss: 0.1450 - val_accuracy: 0.9493 - val_loss: 0.1746
Epoch 9/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 373s 3s/step - accuracy: 0.9565 - loss: 0.1320 - val_accuracy: 0.9501 - val_loss: 0.1675
Epoch 10/10
143/143 ━━━━━━━━━━━━━━━━━━━━ 373s 3s/step - accuracy: 0.9599 - loss: 0.1190 - val_accuracy: 0.9466 - val_loss: 0.1689
Out[81]:
<keras.src.callbacks.history.History at 0x7e87d5c35ad0>
In [82]:
loss, acc = model.evaluate(test_dataset)
print(f"Test Accuracy: {acc:.2f}")
36/36 ━━━━━━━━━━━━━━━━━━━━ 72s 2s/step - accuracy: 0.9421 - loss: 0.1746
Test Accuracy: 0.95
In [87]:
# Saving the ResNet50 model
#model.save('Brain_Tumor_Detection_ResNet50model.h5')
#print("Model saved successfully!")
Model saved successfully!
In [92]:
y_pred_probs = model.predict(test_dataset)
y_pred = np.argmax(y_pred_probs, axis=1)
y_true = np.argmax(y_test, axis=1)
36/36 ━━━━━━━━━━━━━━━━━━━━ 72s 2s/step
In [93]:
print(classification_report(y_true, y_pred, target_names=categories))
              precision    recall  f1-score   support

      glioma       0.96      0.92      0.94       265
  meningioma       0.88      0.90      0.89       269
     notumor       0.98      0.98      0.98       304
   pituitary       0.96      0.98      0.97       305

    accuracy                           0.95      1143
   macro avg       0.95      0.94      0.94      1143
weighted avg       0.95      0.95      0.95      1143

  • The overall accuracy of the model is 0.95 (95%), which is very high.

  • The macro avg and weighted avg are also high at 0.94 and 0.95 respectively, confirming that the model performs consistently well across all classes, even when accounting for potential class imbalance.

  • The model is highly effective for this classification task, with its primary area for minor improvement being the classification of "meningioma" tumors.
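The macro and weighted averages can be reproduced by hand from the per-class recalls and supports in the report above:

```python
import numpy as np

# Per-class recall and support taken from the classification report
recall  = np.array([0.92, 0.90, 0.98, 0.98])
support = np.array([265, 269, 304, 305])

macro    = recall.mean()                             # unweighted mean over classes
weighted = (recall * support).sum() / support.sum()  # weighted by class frequency
print(f'macro = {macro:.3f}, weighted = {weighted:.3f}')
```

The weighted average sits slightly above the macro average because the better-classified classes ("notumor", "pituitary") also have the larger supports.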

In [96]:
import seaborn as sns  # required for the heatmap

cm = confusion_matrix(y_true, y_pred)
plt.figure(figsize=(10, 4))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=categories,
            yticklabels=categories)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.title("Confusion Matrix")
plt.show()

The model shows strong performance overall, with high numbers on the diagonal for all classes. It is particularly effective at distinguishing "glioma" from "pituitary" and "notumor" (as indicated by the zeros in the first row's last two columns). The primary area of confusion for the model seems to be between "glioma" and "meningioma," where there are a number of misclassifications in both directions. The model also misclassifies "meningioma" as other classes more frequently than it misclassifies other classes as "meningioma."


Testing Data


In [ ]:
# Load your image (grayscale)
img_path = '/kaggle/input/brain-tumor-mri-dataset/Testing/glioma/Te-glTr_0000.jpg'  # Replace with actual path
img = tf.io.read_file(img_path)
img = tf.image.decode_image(img, channels=1)  # Grayscale

# Convert to RGB
img = tf.image.grayscale_to_rgb(img)

# Resize
img = tf.image.resize(img, [width, height])

# Keep a displayable copy before ResNet preprocessing shifts the pixel values
img_display = tf.cast(img, tf.uint8)

# Preprocess for ResNet50
img = preprocess_input(img)

# Add batch dimension
img_batch = tf.expand_dims(img, axis=0)  # Shape: (1, width, height, 3)

# Predict
pred = model.predict(img_batch)
pred_class = np.argmax(pred)

# Display result
plt.imshow(img_display)
plt.title(f"Predicted Class: {pred_class}")
plt.axis('off')
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 125ms/step
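The plot title shows only the integer class index; mapping it back to a name just indexes into the categories list defined at the start of the notebook (the same alphabetical label order used for training). A minimal sketch:

```python
categories = ['glioma', 'meningioma', 'notumor', 'pituitary']

def pred_to_label(pred_class: int) -> str:
    """Map an integer class index back to its tumor class name."""
    return categories[pred_class]

print(pred_to_label(0))  # glioma
```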

